Search Results: "francois"

22 August 2016

Zlatan Todorić: When you wake up with a feeling

I woke up at 5am. Somehow I managed to go back to sleep for a while, and woke up again at 6am. Such is the life of jet lag. Or maybe I am just getting too old for it. But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things, which are also very emotional things, and at the same time things that inspire me. On paper, I am the technical leader of Purism. In reality, I have built an insanely good relationship with my CEO in a very short time. So good that for months I was not only leading the technical shift, but also took over operations (taking orders and delivering them while working with our assembly line to automate most of the tasks in this area). I was also acting as the first line of technical support (forums, IRC and email). Actually, for a few months I was pretty much the only line of support. I was making website changes: rewording text, updating a bunch of plugins and making sure everything still worked, resolving (hopefully) the Tor and Cloudflare issues, fighting the annoying caching system for the forums, stopping forum spam, and so on. I worked on better messaging for Purism's public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all the mail from) people who were interested in working for or helping Purism. In the process of doing all that, I may not have been the speediest person for all our users' needs, but I hope they understand and forgive me. I was doing all that while researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) failing kernel builds, and while pushing touchpad patches upstream (not so good yet, but we are still working on them, and they did end up being upstreamed). All while seeing repos go down because of our host, repos go down because of a broken sync with Debian, repos go down because of our key mismanagement, metadata not working well, PureBrowser breaking all the time, the Tor browser being out of date, no real ISO updates, wrong sources.list entries, and so on. And the hardest part was that I was doing all this with a very limited scope and even more limited resources. So what kept me going, what is pushing me forward, and what am I doing? One philosophy - Free software. Let me not explain it as technical debt; let me explain it as a social movement. In an age where people are "bombed" by media and by perpetually lying politicians (who use fear of non-existent threats and terror as a way to control the population), in an age where proprietary corporations sell your freedom so you can gain temporary convenience, the term Free software is like Giordano Bruno in the age of the Inquisition. Free software does not only preserve your freedom to use software source code; it preserves your freedom to think, to think outside the box, and not to be punished for that. It preserves the freedom to live - to choose what to do and when to do it, without having a negative impact on your life or on other people's lives. The freedom to be transparent and to share. Because not only do ideas grow through sharing; we, as human beings, grow as we share. The freedom to say "NO". NO. I somehow learnt, and personally believe, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master who thinks they can plan and choose my life decisions.
No, I will not negotiate my freedom for your convenience (besides, such freedom isn't real anyway, and it is only a matter of time before that illusion blows up in your face). No, I will not accept your credit, because it has STRINGS attached which you either don't disclose or bury in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have an account on the social media where the majority of people are. No, I will not have a pacemaker which is a black box running proprietary (buggy) software and harvesting my data without me being able to look at it. Yin and yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although the media would like us to feel and think that way). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from people better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions, mine or other people's. Yes, I have the right to be imperfect and to make mistakes as long as I acknowledge them and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself gaining more and more features (even if that sometimes means getting rid of other, bad features). All of this blends into my work at Purism. I spend a lot of time thinking about projects, development and the future. I must, in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is betraying ourselves and our community in the pursuit of freedom. We are trying to unify many things - we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of my comfort zone, and out of conventional and sometimes even my own standard ways of thinking. I have seen that the non-existent infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the moment I could say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why, to a very small and not very efficient team. Now we have employed a dedicated and hard-working operations person (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infrastructure. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free software and unify it into a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infrastructure. We have a GNOME Board of Directors member (Jeff) who is trying to brighten our image in the world (working with James to bring some light into the shadows caused by endless supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security whose members I don't want to name now, as we are preparing to announce that soon (and trust me, we have good people there).
But the most important thing here is not that they are all capable or cool people. It is the core value in all of them - they care about freedom, and I trust them on their paths. Trust is always important, but at Purism it is essential for our work. I built a workflow without time management (everyone spends their time each day as they see fit, as long as the work gets done), and we don't create insanely short deadlines just because everyone else thinks that is important (rarely is anything more important than the freedom of our time). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share it with no strings attached. Because of them, and the other good people in our community, I have the energy to devote my entire time to Purism. It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone at Purism finds agreement in the end (we use facts in our judgments), and all the people who merely try to disturb my team's work and mine are not as effective as all the lovely words from the people who believe in us, who send us words of support and who share their ideas and thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work and showing an understanding of the underlying amount of work and issues. While we are limited in resources, we have had the occasional outcry from the community offering to help us. Now I want to help them help me (do you see the freedom of sharing here?). PureOS now has a wiki. It will be a community wiki endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical, but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put information on the wiki, create an ideas page and vote on entries so we can see what the community wants, and chat with us so we all understand what we are working on, why, and how. Make it as transparent as possible. Everyone interested, please get in touch with our teams by poking us online (IRC, social accounts) or via email (our personal addresses, or [hr, pr, feedback]@puri.sm). To finish this piece (as it is 8am here and I still want to rest a bit because I have six hours of meetings straight today), I wanted to share some personal insight into a few things from my point of view. I wanted to say that despite all the troubles and the people who tried to make our time even harder (and it is already hard with all the limitations that naturally come with our kind of work today), we still create products, we still ship them, we still improve step by step, we still hire, and we are still building. Keeping all that together and making progress is, for me, a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal - integrating and cooperating with most of the FLOSS ecosystem. P.S. Yes, I also (finally!) became an official Debian Developer - I still haven't had time to sit down and properly think and cry (as every good man does) about it.

20 August 2016

Francois Marier: Remplacer un disque RAID défectueux

This is a translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/. Here is the procedure I followed to replace a failed RAID drive on a Debian machine.

Replace the failed drive After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
Armed with this information, I shut down the computer, pulled the bad drive out and put a new blank one in.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I examined the partition table on the good drive:
$ parted /dev/sda
unit s
print
and created a new empty partition table on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
Then I used the mkpart command for my 4 partitions and made them all the same size as the matching partitions on /dev/sda. Finally, I used toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) on all the RAID partitions, before verifying with the print command that the two partition tables were now identical.

Resync/recreate the RAID arrays To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the status of the sync using:
watch -n 2 cat /proc/mdstat
To speed up the process, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then I recreated my RAID0 swap partition like this:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (it is not possible to restore a RAID0 array; it has to be recreated from scratch), I had to do two things:
  • replace the UUID for the swap mount in /etc/fstab with the UUID given by the mkswap command (or by running blkid and taking the UUID for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan

Making sure the machine can boot with the replacement drive To be certain that I can boot the machine from either of the two drives, I reinstalled the grub boot loader onto the new drive:
grub-install /dev/sdb
before rebooting with both drives connected. This confirmed that my configuration works. Then I booted without /dev/sda to make sure that everything would be fine should that drive decide to die and leave me with only the new one (/dev/sdb). This test obviously breaks the synchronization between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once all that was done, I rebooted again with both drives connected to confirm that everything works:
cat /proc/mdstat
and then ran a full SMART test on the new drive:
smartctl -t long /dev/sdb

23 July 2016

Francois Marier: Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I took a look at what the partition table looks like on the good drive:
$ parted /dev/sda
unit s
print
and created a new empty one on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda. Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
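For illustration, the rest of that parted session on /dev/sdb might look roughly like the following, where the STARTn/ENDn sector values are placeholders to be read off the print output of /dev/sda:
mkpart primary START1 END1
mkpart primary START2 END2
mkpart primary START3 END3
mkpart primary START4 END4
toggle 1 bios_grub
toggle 2 raid
toggle 3 raid
toggle 4 raid
print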

Resync/recreate the RAID arrays To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the status of this sync using:
watch -n 2 cat /proc/mdstat
In order to speed up the sync, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then, I recreated my RAID0 swap partition like this:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:
  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
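As a concrete illustration of those two edits (the UUIDs below are placeholders, not values from this machine):
# /etc/fstab: swap line with the UUID printed by mkswap
UUID=c0ffee00-1234-5678-9abc-def012345678  none  swap  sw  0  0
# /etc/mdadm/mdadm.conf: array line as printed by mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 name=myhost:1 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd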

Ensuring that I can boot with the replacement drive In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:
grub-install /dev/sdb
before rebooting with both drives to first make sure that my new config works. Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb). This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:
cat /proc/mdstat
Then I ran a full SMART test over the new replacement drive:
smartctl -t long /dev/sdb

11 June 2016

Francois Marier: Cleaning up obsolete config files on Debian and Ubuntu

As part of regular operating system hygiene, I run a cron job which updates package metadata and looks for obsolete packages and configuration files. While there is already some easily available information on how to purge unneeded or obsolete packages and how to clean up config files properly in maintainer scripts, the guidance on how to delete obsolete config files is not easy to find and somewhat incomplete. These are the obsolete conffiles I started with:
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/tunables/ntpd 5519e4c01535818cb26f2ef9e527f191 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
 /etc/apparmor.d/usr.sbin.ntpd a00aa055d1a5feff414bacc89b8c9f6e obsolete
 /etc/bash_completion.d/initramfs-tools 7eeb7184772f3658e7cf446945c096b1 obsolete
 /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete
 /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf 8df3896101328880517f530c11fff877 obsolete
 /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf d81013f5bfeece9858706aed938e16bb obsolete
To get rid of the /etc/bash_completion.d/ files, I first determined what packages they were registered to:
$ dpkg -S /etc/bash_completion.d/initramfs-tools
initramfs-tools: /etc/bash_completion.d/initramfs-tools
$ dpkg -S /etc/bash_completion.d/insserv
initramfs-tools: /etc/bash_completion.d/insserv
and then followed Paul Wise's instructions:
$ rm /etc/bash_completion.d/initramfs-tools /etc/bash_completion.d/insserv
$ apt install --reinstall initramfs-tools insserv
For some reason that didn't work for the /etc/dbus-1/system.d/ files and I had to purge and reinstall the relevant package:
$ dpkg -S /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
$ dpkg -S /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
$ apt purge system-config-printer-common
$ apt install system-config-printer
The files in /etc/apparmor.d/ were even more complicated to deal with because purging the packages that they come from didn't help:
$ dpkg -S /etc/apparmor.d/abstractions/evince
evince: /etc/apparmor.d/abstractions/evince
$ apt purge evince
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
I was however able to get rid of them by also purging the apparmor profile packages that are installed on my machine:
$ apt purge apparmor-profiles apparmor-profiles-extra evince ntp
$ apt install apparmor-profiles apparmor-profiles-extra evince ntp
I am not sure why I had to do this, but I suspect that these files used to be shipped by one of the apparmor packages and then eventually migrated directly to the evince and ntp packages, and dpkg got confused. If you're in a similar situation, you may want to search for the file you're trying to get rid of on Google; you might end up on http://apt-browse.org/, which could lead you to the old package that used to own this file.
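The hygiene cron job mentioned at the start of this post is not shown here; a minimal sketch of that kind of job, with a hypothetical path and schedule, could look like this:
#!/bin/sh
# /etc/cron.weekly/package-hygiene (hypothetical)
set -e
# refresh package metadata
apt-get update -qq
# report packages that are no longer needed
apt-get --dry-run autoremove
# report obsolete conffiles left behind by upgrades
dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$' || true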

8 June 2016

Francois Marier: Simple remote mail queue monitoring

In order to monitor some of the machines I maintain, I rely on a simple email setup using logcheck. Unfortunately that system completely breaks down if mail delivery stops. This is the simple setup I've come up with to ensure that mail doesn't pile up on the remote machine.

Server setup The first thing I did on the server-side is to follow Sean Whitton's advice and configure postfix so that it keeps undelivered emails for 10 days (instead of 5 days, the default):
postconf -e maximal_queue_lifetime=10d
Then I created a new user:
adduser mailq-check
with a password straight out of pwgen -s 32. I gave ssh permission to that user:
adduser mailq-check sshuser
and then authorized my new ssh key (see next section):
sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys
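Optionally, and not part of the setup above, the key can be locked down so that it is only able to run mailq, using standard OpenSSH authorized_keys options (key material elided):
# options prepended to the key's line in ~/.ssh/authorized_keys
command="mailq",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... mailq-check@laptop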

Laptop setup On my laptop, the machine from where I monitor the server's mail queue, I first created a new password-less ssh key:
ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub
which I then installed on the server. Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:
0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"
and that's it. I get a (locally delivered) email whenever the mail queue on the server is non-empty. There is a race condition built into this setup since it's possible that the server will want to send an email at 2am. However, all that does is send a spurious warning email in that case and so it's a pretty small price to pay for a dirt simple setup that's unlikely to break.

26 April 2016

Francois Marier: Using DNSSEC and DNSCrypt in Debian

While there is real progress being made towards eliminating insecure HTTP traffic, DNS is a fundamental Internet service that still usually relies on unauthenticated cleartext. There are however a few efforts to try and fix this problem. Here is the setup I use on my Debian laptop to make use of both DNSSEC and DNSCrypt.

DNSCrypt DNSCrypt was created to enable end-users to encrypt the traffic between themselves and their chosen DNS resolver. To switch away from your ISP's default DNS resolver to a DNSCrypt resolver, simply install the dnscrypt-proxy package and then set it as the default resolver either in /etc/resolv.conf:
nameserver 127.0.2.1
if you are using a static network configuration or in /etc/dhcp/dhclient.conf:
supersede domain-name-servers 127.0.2.1;
if you rely on dynamic network configuration via DHCP. There are two things you might want to keep in mind when choosing your DNSCrypt resolver:
  • whether or not they keep any logs of the DNS traffic
  • whether or not they support DNSSEC
I have personally selected a resolver located in Iceland by setting the following in /etc/default/dnscrypt-proxy:
DNSCRYPT_PROXY_RESOLVER_NAME=ns0.dnscrypt.is

DNSSEC While DNSCrypt protects the confidentiality of our DNS queries, it doesn't give us any assurance that the results of such queries are the right ones. In order to authenticate results in that way and prevent DNS poisoning, a hierarchical cryptographic system was created: DNSSEC. In order to enable it, I have setup a local unbound DNSSEC resolver on my machine and pointed /etc/resolv.conf (or /etc/dhcp/dhclient.conf) to my unbound installation at 127.0.0.1. Then I put the following in /etc/unbound/unbound.conf.d/dnscrypt.conf:
server:
    # Remove localhost from the donotquery list
    do-not-query-localhost: no
forward-zone:
    name: "."
    forward-addr: 127.0.2.1@53
to stop unbound from resolving DNS directly and to instead go through the encrypted DNSCrypt proxy.
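As a sanity check that is not part of the original setup, one can confirm that the resulting chain validates DNSSEC by looking for the ad (authenticated data) flag in a response from the local unbound resolver (this assumes the dig tool from the dnsutils package is installed):
# a validating resolver sets the "ad" flag on successfully authenticated answers
dig +dnssec debian.org @127.0.0.1 | grep flags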

Reliability In my experience, unbound and dnscrypt-proxy are fairly reliable but they eventually get confused (presumably) by network changes and start returning errors. The ugly but dependable work-around I have found is to create a cronjob at /etc/cron.d/restart-dns.conf that restarts both services once a day:
0 3 * * *    root    /usr/sbin/service dnscrypt-proxy restart
1 3 * * *    root    /usr/sbin/service unbound restart

Captive portals The one remaining problem I need to solve has to do with captive portals. This can be quite annoying when travelling because it requires me to use the portal's DNS resolver in order to connect to the splash screen that unlocks the wifi connection. The dnssec-trigger package looked promising but when I tried it on my jessie laptop, it wasn't particularly reliable. My temporary work-around is to comment out this line in /etc/dhcp/dhclient.conf whenever I need to connect to such annoying wifi networks:
#supersede domain-name-servers 127.0.0.1;
If you've found a better solution to this problem, please leave a comment!

1 April 2016

Francois Marier: How Safe Browsing works in Firefox

Firefox has had support for Google's Safe Browsing since 2005 when it started as a stand-alone Firefox extension. At first it was only available in the USA, but it was opened up to the rest of the world in 2006 and moved to the Google Toolbar. It then got integrated directly into Firefox 2.0 before the public launch of the service in 2007. Many people seem confused by this phishing and malware protection system and while there is a pretty good explanation of how it works on our support site, it doesn't go into technical details. This will hopefully be of interest to those who have more questions about it.

Browsing Protection The main part of the Safe Browsing system is the one that watches for bad URLs as you're browsing. Browsing protection currently protects users from malware pages, unwanted software pages and phishing pages. If a Firefox user attempts to visit one of these sites, a warning page shows up instead. The first two warnings can be toggled using the browser.safebrowsing.malware.enabled preference (in about:config) whereas the last one is controlled by browser.safebrowsing.enabled.

List updates It would be too slow (and privacy-invasive) to contact a trusted server every time the browser wants to establish a connection with a web server. Instead, Firefox downloads a list of bad URLs every 30 minutes from the server (browser.safebrowsing.provider.google.updateURL) and does a lookup against its local database before displaying a page to the user. Downloading the entire list of sites flagged by Safe Browsing would be impractical due to its size so the following transformations are applied:
  1. each URL on the list is canonicalized,
  2. then hashed,
  3. of which only the first 32 bits of the hash are kept.
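To make steps 2 and 3 concrete, here is a rough sketch using standard shell tools; the URL is a made-up example and the real canonicalization rules are more involved than shown here:
# SHA-256 of a canonicalized URL, keeping only the first 32 bits (8 hex digits)
printf '%s' 'example.com/' | sha256sum | cut -c1-8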
The lists that are requested from the Safe Browsing server and used to flag pages as malware/unwanted or phishing can be found in urlclassifier.malwareTable and urlclassifier.phishTable respectively. If you want to see some debugging information in your terminal while Firefox is downloading updated lists, turn on browser.safebrowsing.debug. Once downloaded, the lists can be found in the cache directory:
  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/ on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/ on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\ on Windows

Resolving partial hash conflicts Because the Safe Browsing database only contains partial hashes, it is possible for a safe page to share the same 32-bit hash prefix as a bad page. Therefore when a URL matches the local list, the browser needs to know whether or not the rest of the hash matches the entry on the Safe Browsing list. In order to resolve such conflicts, Firefox requests from the Safe Browsing server (browser.safebrowsing.provider.mozilla.gethashURL) all of the hashes that start with the affected 32-bit prefix and adds these full-length hashes to its local database. Turn on browser.safebrowsing.debug to see some debugging information on the terminal while these "completion" requests are made. If the current URL doesn't match any of these full hashes, the load proceeds as normal. If it does match one of them, a warning interstitial page is shown and the load is canceled.

Download Protection The second part of the Safe Browsing system protects users against malicious downloads. It was launched in 2011, implemented in Firefox 31 on Windows and enabled in Firefox 39 on Mac and Linux. It roughly works like this:
  1. Download the file.
  2. Check the main URL, referrer and redirect chain against a local blocklist (urlclassifier.downloadBlockTable) and block the download in case of a match.
  3. On Windows, if the binary is signed, check the signature against a local whitelist (urlclassifier.downloadAllowTable) of known good publishers and release the download if a match is found.
  4. If the file is not a binary file then release the download.
  5. Otherwise, send the binary file's metadata to the remote application reputation server (browser.safebrowsing.downloads.remote.url) and block the download if the server indicates that the file isn't safe.
Blocked downloads can be unblocked by right-clicking on them in the download manager and selecting "Unblock". While the download protection feature is automatically disabled when malware protection (browser.safebrowsing.malware.enabled) is turned off, it can also be disabled independently via the browser.safebrowsing.downloads.enabled preference. Note that Step 5 is the only point at which any information about the download is shared with Google. That remote lookup can be suppressed via the browser.safebrowsing.downloads.remote.enabled preference for those users concerned about sending that metadata to a third party.

Types of malware The original application reputation service would protect users against "dangerous" downloads, but it has recently been expanded to also warn users about unwanted software as well as software that's not commonly downloaded. These various warnings can be turned on and off in Firefox through the following preferences:
  • browser.safebrowsing.downloads.remote.block_dangerous
  • browser.safebrowsing.downloads.remote.block_dangerous_host
  • browser.safebrowsing.downloads.remote.block_potentially_unwanted
  • browser.safebrowsing.downloads.remote.block_uncommon
and tested using Google's test page. If you want to see how often each "verdict" is returned by the server, you can have a look at the telemetry results for Firefox Beta.

Privacy One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe. While this was an option in version 1 of the Safe Browsing protocol (as disclosed in their privacy policy at the time), support for this "enhanced mode" was removed in Firefox 3 and the version 1 server was decommissioned in late 2011 in favor of version 2 of the Safe Browsing API which doesn't offer this type of real-time lookup. Google explicitly states that the information collected as part of operating the Safe Browsing service "is only used to flag malicious activity and is never used anywhere else at Google" and that "Safe Browsing requests won't be associated with your Google Account". In addition, Firefox adds a few privacy protections:
  • Query string parameters are stripped from URLs we check as part of the download protection feature.
  • Cookies set by the Safe Browsing servers to protect the service from abuse are stored in a separate cookie jar so that they are not mixed with regular browsing/session cookies.
  • When requesting complete hashes for a 32-bit prefix, Firefox throws in a number of extra "noise" entries to obfuscate the original URL further.
On balance, we believe that most users will want to keep Safe Browsing enabled, but we also make it easy for users with particular needs to turn it off.

Learn More If you want to learn more about how Safe Browsing works in Firefox, you can find all of the technical details on the Safe Browsing and Application Reputation pages of the Mozilla wiki or you can ask questions on our mailing list. Google provides some interesting statistics about what their systems detect in their transparency report and offers a tool to find out why a particular page has been blocked. Some information on how phishing sites are detected is also available on the Google Security blog, but for more detailed information about all parts of the Safe Browsing system, see the following papers:

7 January 2016

Francois Marier: Streamzap remotes and evdev in MythTV

Modern versions of Linux and MythTV enable infrared remote controls without the need for lirc. Here's how I migrated my Streamzap remote to evdev.

Installing packages In order to avoid conflicts between evdev and lirc, I started by removing lirc and its config:
apt purge lirc
and then I installed this tool:
apt install ir-keytable

Remapping keys While my Streamzap remote works out of the box with kernel 3.16, the keycodes that it sends to Xorg are not the ones that MythTV expects. I therefore copied the existing mapping:
cp /lib/udev/rc_keymaps/streamzap /home/mythtv/
and changed it to this:
0x28c0 KEY_0
0x28c1 KEY_1
0x28c2 KEY_2
0x28c3 KEY_3
0x28c4 KEY_4
0x28c5 KEY_5
0x28c6 KEY_6
0x28c7 KEY_7
0x28c8 KEY_8
0x28c9 KEY_9
0x28ca KEY_ESC
0x28cb KEY_MUTE #  
0x28cc KEY_UP
0x28cd KEY_RIGHTBRACE
0x28ce KEY_DOWN
0x28cf KEY_LEFTBRACE
0x28d0 KEY_UP
0x28d1 KEY_LEFT
0x28d2 KEY_ENTER
0x28d3 KEY_RIGHT
0x28d4 KEY_DOWN
0x28d5 KEY_M
0x28d6 KEY_ESC
0x28d7 KEY_L
0x28d8 KEY_P
0x28d9 KEY_ESC
0x28da KEY_BACK # <
0x28db KEY_FORWARD # >
0x28dc KEY_R
0x28dd KEY_PAGEUP
0x28de KEY_PAGEDOWN
0x28e0 KEY_D
0x28e1 KEY_I
0x28e2 KEY_END
0x28e3 KEY_A
The complete list of all EV_KEY keycodes can be found in the kernel. The following command will write this mapping to the driver:
/usr/bin/ir-keytable w /home/mythtv/streamzap -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00
and they should take effect once MythTV is restarted.

Applying the mapping at boot While the naïve solution is to apply the mapping at boot (for example, by sticking it in /etc/rc.local), that only works if the right modules are loaded before rc.local runs. A much better solution is to write a udev rule so that the mapping is written after the driver is loaded. I created /etc/udev/rules.d/streamzap.rules with the following:
# Configure remote control for MythTV
# https://www.mythtv.org/wiki/User_Manual:IR_control_via_evdev#Modify_key_codes
ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -c -w /home/mythtv/streamzap -D 1000 -P 250 -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00"
and got the vendor and product IDs using:
grep '^[IN]:' /proc/bus/input/devices
The -D and -P parameters control what happens when a button on the remote is held down and the keypress must be repeated. These delays are in milliseconds.

30 December 2015

Francois Marier: Linux kernel module options on Debian

Linux kernel modules often have options that can be set. Here's how to make use of them on Debian-based systems, using the i915 Intel graphics driver as an example. To get the list of all available options:
modinfo -p i915
To check the current value of a particular option:
cat /sys/module/i915/parameters/enable_ppgtt
To give that option a value when the module is loaded, create a new /etc/modprobe.d/i915.conf file and put the following in it:
options i915 enable_ppgtt=0
and then re-generate the initial RAM disks:
update-initramfs -u -k all
Alternatively, that option can be set at boot time on the kernel command line by setting the following in /etc/default/grub:
GRUB_CMDLINE_LINUX="i915.enable_ppgtt=0"
and then updating the grub config:
update-grub2

7 December 2015

Francois Marier: Tweaking Cookies For Privacy in Firefox

Cookies are an important part of the Web since they are the primary mechanism that websites use to maintain user sessions. Unfortunately, they are also abused by surveillance marketing companies to follow you around the Web. Here are a few things you can do in Firefox to protect your privacy.

Cookie Expiry Cookies are sent from the website to your browser via a Set-Cookie HTTP header on the response. It looks like this:
HTTP/1.1 200 OK
Date: Mon, 07 Dec 2015 16:55:43 GMT
Server: Apache
Set-Cookie: SESSIONID=65576c6c64206e6f2c657920756f632061726b636465742065686320646f2165
Content-Length: 2036
Content-Type: text/html;charset=UTF-8
When your browser sees this, it saves that cookie for the given hostname and keeps it until you close the browser. Should a site want to persist their cookie for longer, they can add an Expires attribute:
Set-Cookie: SESSIONID=65576c...; expires=Tue, 06-Dec-2016 22:38:26 GMT
in which case the browser will retain the cookie until the server-provided expiry date (which could be in a few years). Of course, that's if you don't instruct your browser to do things differently.

Third-Party Cookies So far, we've only looked at first-party cookies: the ones set by the website you visit and which are typically used to synchronize your login state with the server. There is however another kind: third-party cookies. These ones are set by the third-party resources that a page loads. For example, if a page loads JavaScript from a third-party ad network, you can be pretty confident that they will set their own cookie in order to build a profile on you and serve you "better and more relevant ads".

Controlling Third-Party Cookies If you'd like to opt out of these, you have a couple of options. The first one is to turn off third-party cookies entirely by going back into the Privacy preferences and selecting "Never" next to the "Accept third-party cookies" setting (network.cookie.cookieBehavior = 1). Unfortunately, turning off third-party cookies entirely tends to break a number of sites which rely on this functionality (for example as part of their login process). A more forgiving option is to accept third-party cookies only for sites which you have actually visited directly. For example, if you visit Facebook and login, you will get a cookie from them. Then when you visit other sites which include Facebook widgets they will not recognize you unless you allow cookies to be sent in a third-party context. To do that, choose the "From visited" option (network.cookie.cookieBehavior = 3). In addition to this setting, you can also choose to make all third-party cookies automatically expire when you close Firefox by setting the network.cookie.thirdparty.sessionOnly option to true in about:config.

Other Ways to Limit Third-Party Cookies Another way to limit undesirable third-party cookies is to tell the browser to avoid connecting to trackers in the first place. This functionality is now built into Private Browsing mode and enabled by default. To enable it outside of Private Browsing too, simply go into about:config and set privacy.trackingprotection.enabled to true. You could also install the EFF's Privacy Badger add-on which uses heuristics to detect and block trackers, unlike Firefox tracking protection which uses a blocklist of known trackers.

My Recommended Settings On my work computer I currently use the following:
network.cookie.cookieBehavior = 3
network.cookie.lifetimePolicy = 3
network.cookie.lifetime.days = 5
network.cookie.thirdparty.sessionOnly = true
privacy.trackingprotection.enabled = true
which allows me to stay logged into most sites for the whole week (no matter how often I restart Firefox Nightly) while limiting tracking and other undesirable cookies as much as possible.

13 November 2015

Francois Marier: How Tracking Protection works in Firefox

Firefox 42, which was released last week, introduced a new feature in its Private Browsing mode: tracking protection. If you are interested in how this list is put together and then used in Firefox, this post is for you.

Safe Browsing lists There are many possible ways to download URL lists to the browser and check against that list before loading anything. One of those is already implemented as part of our malware and phishing protection. It uses the Safe Browsing v2.2 protocol. In a nutshell, the way that this works is that each URL on the block list is hashed (using SHA-256) and then that list of hashes is downloaded by Firefox and stored into a data structure on disk:
  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/mozstd-track* on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/mozstd-track* on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\mozstd-track* on Windows
This sbdbdump script can be used to extract the hashes contained in these files and will output something like this:
$ ~/sbdbdump/dump.py -v .
- Reading sbstore: mozstd-track-digest256
[mozstd-track-digest256] magic 1231AF3B Version 3 NumAddChunk: 1 NumSubChunk: 0 NumAddPrefix: 0 NumSubPrefix: 0 NumAddComplete: 1696 NumSubComplete: 0
[mozstd-track-digest256] AddChunks: 1445465225
[mozstd-track-digest256] SubChunks:
...
[mozstd-track-digest256] addComplete[chunk:1445465225] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5
...
[mozstd-track-digest256] MD5: 81a8becb0903de19351427b24921a772
The name of the blocklist being dumped here (mozstd-track-digest256) is set in the urlclassifier.trackingTable preference which you can find in about:config. The most important part of the output shown above is the addComplete line which contains a hash that we will see again in a later section.

List lookups Once it's time to load a resource, Firefox hashes the URL, as well as a few variations of it, and then looks for it in the local lists. If there's no match, then the load proceeds. If there's a match, then we do an additional check against a pairwise allowlist. The pairwise allowlist (hardcoded in the urlclassifier.trackingWhitelistTable pref) is designed to encode what we call "entity relationships". The list groups related domains together for the purpose of checking whether a load is first or third party (e.g. twitter.com and twimg.com both belong to the same entity). Entries on this list (named mozstd-trackwhite-digest256) look like this:
twitter.com/?resource=twimg.com
which translates to "if you're on the twitter.com site, then don't block resources from twimg.com". If there's a match on the second list, we don't block the load. It's only when we get a match on the first list and not the second one that we go ahead and cancel the network load. If you visit our test page, you will see tracking protection in action with a shield icon in the URL bar. Opening the developer tool console will expose the URL of the resource that was blocked:
The resource at "https://trackertest.org/tracker.js" was blocked because tracking protection is enabled.

Creating the lists The blocklist is created by Disconnect according to their definition of tracking. The Disconnect list is on their Github page, but the copy we use in Firefox is the copy we have in our own repository. Similarly the Disconnect entity list is from here but our copy is in our repository. Should you wish to be notified of any changes to the lists, you can simply subscribe to this Atom feed. To convert this JSON-formatted list into the binary format needed by the Safe Browsing code, we run a custom list generation script whenever the list changes on GitHub. If you run that script locally using the same configuration as our server stack, you can see the conversion from the original list to the binary hashes. Here's a sample entry from the mozstd-track-digest256.log file:
[m] twimg.com >> twimg.com/
[canonicalized] twimg.com/
[hash] e48768b0ce59561e5bc141a52061dd45524e75b66cad7d59dd92e4307625bdc5
and one from mozstd-trackwhite-digest256.log:
[entity] Twitter >> (canonicalized) twitter.com/?resource=twimg.com, hash a8e9e3456f46dbe49551c7da3860f64393d8f9d96f42b5ae86927722467577df
This, in combination with the sbdbdump script mentioned earlier, will allow you to audit the contents of the local lists.
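For example, assuming the digest256 lists really are plain SHA-256 over the canonicalized string (which is what the log excerpts above suggest), the twimg.com entry can be checked from a shell:
# should reproduce the e48768b0... hash shown in the log excerpts above
printf '%s' 'twimg.com/' | sha256sum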

Serving the lists The way that the binary lists are served to Firefox is through a custom server component written by Mozilla: shavar. Every hour, Firefox requests updates from shavar.services.mozilla.com. If new data is available, then the whole list is downloaded again. Otherwise, all it receives in return is an empty 204 response. Should you want to play with it and run your own server, follow the installation instructions and then go into about:config to change these preferences to point to your own instance:
browser.trackingprotection.gethashURL
browser.trackingprotection.updateURL
Note that on Firefox 43 and later, these prefs have been renamed to:
browser.safebrowsing.provider.mozilla.gethashURL
browser.safebrowsing.provider.mozilla.updateURL

Learn more If you want to learn more about how tracking protection works in Firefox, you can find all of the technical details on the Mozilla wiki or you can ask questions on our mailing list. Thanks to Tanvi Vyas for reviewing a draft of this post.

17 October 2015

Francois Marier: Introducing reboot-notifier for jessie and stretch

One of the packages that got lost in the transition from Debian wheezy to jessie was the update-notifier-common package which could be used to receive notifications when a reboot is needed (for example, after installing a kernel update). I decided to wrap this piece of functionality along with a simple cron job and create a new package: reboot-notifier. Because it uses the same file (/var/run/reboot-required) to indicate that a reboot is needed, it should work fine with any custom scripts that admins might have written prior to jessie. If you're running sid or stretch, all you need to do is:
apt install reboot-notifier
On jessie, you'll need to add the backports repository to /etc/apt/sources.list:
deb http://httpredir.debian.org/debian jessie-backports main
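With that entry in place, the package can then be pulled explicitly from backports; a minimal sketch (the -t option selects the jessie-backports release):
apt-get update
apt-get -t jessie-backports install reboot-notifier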

19 September 2015

Francois Marier: Hooking into docking and undocking events to run scripts

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts. This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock. The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events Create the following ACPI event scripts as suggested in this guide. Firstly, /etc/acpi/events/thinkpad-dock:
event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"
Secondly, /etc/acpi/events/thinkpad-undock:
event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"
then restart udev:
sudo service udev restart

Finding the right events To make sure the events are the right ones, watch the output of:
sudo acpi_listen
and ensure that your script is actually running by adding:
logger "ACPI event: $*"
at the beginning of it and then looking in /var/log/syslog for lines like:
logger: external-monitor undock
logger: external-monitor dock
If that doesn't work for some reason, try using an ACPI event script like this:
event=ibm/hotkey
action=logger %e
to see which event you should hook into.

Using xrandr inside an ACPI event script Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:
#!/bin/sh
logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1

14 September 2015

Francois Marier: Setting up a network scanner using SANE

Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).

Install SANE The packages you need on both the client and the server are the SANE utilities and backends (sane-utils and libsane in Debian). You should check whether or not your scanner is supported by the latest stable release or by the latest development version. In my case, I needed to get a Canon LiDE 220 working so I had to grab the libsane 1.0.25+git20150528-1 package from Debian experimental.

Test the scanner locally Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:
scanimage -L
This should give you output similar to this:
device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
If that doesn't work, make sure that the scanner is actually detected by the USB stack:
$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.
and that its USB ID shows up in the SANE backend it needs:
$ grep 190f /etc/sane.d/genesys.conf 
usb 0x04a9 0x190f
To do a test scan, simply run:
scanimage > test.ppm
and then take a look at the (greyscale) image it produced (test.ppm).

Configure the server With the scanner working locally, it's time to expose it to network clients by adding the client IP addresses to /etc/sane.d/saned.conf:
## Access list
192.168.1.3
and then opening the appropriate port on your firewall (typically /etc/network/iptables in Debian):
-A INPUT -s 192.168.1.3 -p tcp --dport 6566 -j ACCEPT
Then you need to ensure that the SANE server is running by setting the following in /etc/default/saned:
RUN=yes
if you're using the sysv init system, or by running this command:
systemctl enable saned.socket
if using systemd. I actually had to reboot to make saned visible to systemd, so if you still run into these errors:
$ service saned start
Failed to start saned.service: Unit saned.service is masked.
you're probably just one reboot away from getting it to work.

Configure the client On the client, all you need to do is add the following to /etc/sane.d/net.conf:
connect_timeout = 60
myserver
where myserver is the hostname or IP address of the server running saned.

Test the scanner remotely With everything in place, you should be able to see the scanner from the client computer:
$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
and successfully perform a test scan using this command:
scanimage > test.ppm
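If more than one scanner is visible from the client, the device can also be selected explicitly; this is just a sketch reusing the device name reported by scanimage -L above:
scanimage -d net:myserver:genesys:libusb:001:006 > test.ppm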

29 August 2015

Francois Marier: Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and setup a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward. First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:
-A INPUT -p tcp --dport 10022 -j ACCEPT
Then I created a new CNAME for my server in DNS:
pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.
With that in place, I started the pagekite frontend using this command:
pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup After installing the pagekite and openssh-server packages on my laptop and creating a new user account:
adduser roc
I used this command to connect my laptop to the pagekite frontend:
pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup Finally, my colleague needed to add the following entry to ~/.ssh/config:
Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p
and install the netcat-openbsd package since other versions of netcat don't work. On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work. He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent I was quite happy settings things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

15 August 2015

Simon Kainz: DUCK challenge: week 6

Well, here are the stats for week 6 of the DUCK challenge: we had 9 packages fixed and uploaded by 7 different uploaders. A big "Thank You" to you!! Since the start of this challenge, a total of 68 packages have been fixed. Here is a quick overview:
           Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
# Packages     10      15      10      14      10       9       -
Total          10      25      35      49      59      68       -
The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line. So, assuming that the current rate of packages fixed stays somewhat stable and there are no additional regressions, the number of packages with issues should be down to 0 in about 209 weeks (~4 years). I just arrived at DebConf15 in Heidelberg, and will try to find all of you who fixed and uploaded packages. If you are one of them and see me lingering around, please talk to me and get your lighter! The DUCK Challenge will run until the end of DebConf15, but as there might be some delay before my scripts detect your upload, please contact me directly. Previous articles are here: Week 1, Week 2, Week 3, Week 4, Week 5.

1 August 2015

Francois Marier: Setting the wifi regulatory domain on Linux and OpenWRT

The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.

Linux For most Linux-based computers, you can look up and change the current regulatory domain using these commands:
iw reg get
iw reg set CA
where CA is the two-letter country code of the country where the device is located. On Debian and Ubuntu, you can make this setting permanent by putting the country code in /etc/default/crda. Finally, to see the list of channels that are available in the current config, use:
iwlist wlan0 frequency
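Going back to the permanent setting mentioned above, /etc/default/crda normally contains a single line naming the regulatory domain; assuming the format used by the Debian/Ubuntu crda package, it would look like this:
REGDOMAIN=CA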

OpenWRT On OpenWRT-based routers (including derivatives like Gargoyle), looking up and setting the regulatory domain temporarily works the same way (i.e. the iw commands above). In order to persist your changes though, you need to use the uci command:
uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless
where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:
uci show wireless
To test that it worked, simply reboot the router and then look at the selected regulatory domain:
iw reg get
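To double-check which channels actually became available after the change, iw can also list them per radio (a rough sketch; the exact output depends on the driver):
iw list | grep -A 15 'Frequencies:'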

Scanning the local wifi environment Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).
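If you just want a rough idea of nearby networks and their channels from a laptop, the standard wireless tools can give you a quick (if crude) picture; a sketch assuming the interface is called wlan0:
sudo iwlist wlan0 scan | grep -E 'ESSID|Channel|Signal'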

25 July 2015

Dirk Eddelbuettel: Rcpp 0.12.0: Now with more Big Data!

A new release 0.12.0 of Rcpp arrived on the CRAN network for GNU R this morning, and I also pushed a Debian package upload. Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 423 packages on CRAN depend on Rcpp for making analyses go faster and further. Note that this is 60 more packages since the last release in May! Also, BioConductor adds another 57 packages, and casual searches on GitHub suggest many more. And according to Andrie De Vries, Rcpp now has a page rank of one on CRAN as well! And with this release, Rcpp also becomes ready for Big Data, or, as they call it in Texas, Data. Thanks to a lot of work and several pull requests by Qiang Kou, support for R_xlen_t has been added. That means we can now do stunts like
R> library(Rcpp)
R> big <- 2^31-1
R> bigM <- rep(NA, big)
R> bigM2 <- c(bigM, bigM)
R> cppFunction("double getSz(LogicalVector x) { return x.length(); }")
R> getSz(bigM)
[1] 2147483647
R> getSz(bigM2)
[1] 4294967294
R>
where prior versions of Rcpp would just have said
> getSz(bigM2)
Error in getSz(bigM2) :
  long vectors not supported yet: ../../src/include/Rinlinedfuns.h:137
>
which is clearly not Texas-style. Another welcome change, also thanks to Qiang Kou, adds encoding support for strings. A lot of other things got polished. We are still improving exception handling as we still get the odd curveball in corner cases. Matt Dziubinski corrected the var() computation to use the proper two-pass method and added better support for lambda functions in Sugar expressions using sapply(); Qiang Kou added more pull requests, mostly for string initialization; Romain added a pull request which made data frame creation a little more robust; and JJ was his usual self in tirelessly looking after all aspects of Rcpp Attributes. As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list. Last but not least, we are also extremely pleased to announce that Qiang Kou has joined us in the Rcpp-Core team. We are looking forward to a lot more awesome! See below for a detailed list of changes extracted from the NEWS file.
Changes in Rcpp version 0.12.0 (2015-07-24)
  • Changes in Rcpp API:
    • Rcpp_eval() no longer uses R_ToplevelExec when evaluating R expressions; this should resolve errors where calling handlers (e.g. through suppressMessages()) were not properly respected.
    • All internal length variables have been changed from R_len_t to R_xlen_t to support vectors longer than 2^31-1 elements (via pull request 303 by Qiang Kou).
    • The sugar function sapply now supports lambda functions (addressing issue 213 thanks to Matt Dziubinski)
    • The var sugar function now uses a more robust two-pass method, supports complex numbers, with new unit tests added (via pull request 320 by Matt Dziubinski)
    • String constructors now allow encodings (via pull request 310 by Qiang Kou)
    • String objects are preserving the underlying SEXP objects better, and are more careful about initializations (via pull requests 322 and 329 by Qiang Kou)
    • DataFrame constructors are now a little more careful (via pull request 301 by Romain Francois)
    • For R 3.2.0 or newer, Rf_installChar() is used instead of Rf_install(CHAR()) (via pull request 332).
  • Changes in Rcpp Attributes:
    • Use more robust method of ensuring unique paths for generated shared libraries.
    • The evalCpp function now also supports the plugins argument.
    • Correctly handle signature termination characters ('{' or ';') contained in quotes.
  • Changes in Rcpp Documentation:
    • The Rcpp-FAQ vignette was once again updated with respect to OS X issues and Fortran libraries needed for e.g. RcppArmadillo.
    • The included Rcpp.bib bibtex file (which is also used by other Rcpp* packages) was updated with respect to its CRAN references.
Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

8 June 2015

Craig Small: Checking Cloudflare SSL

My website for a while has used CloudFlare as its front-end. It's a rather nice setup and means my real server gets less of a hammering, which is a good thing. A few months ago they enabled a feature called Universal SSL which I have also added to my site. Around the same time, my SSL check scripts started failing for the website: the certificate had apparently expired many, many days ago. Something wasn't right. The Problem The problem was that at first I'd get emails saying "The SSL certificate for enc.com.au (CN: ) has expired!". I use a program called ssl-cert-check that would check all (web, smtp, imap) of my certificates. It's very easy to forget to renew and this program runs daily and does a simple check. Running the program on the command line gave some more information, but nothing (for me) that really helped:
$ /usr/bin/ssl-cert-check -s enc.com.au -p 443
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
unable to load certificate
140364897941136:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
139905089558160:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140017829234320:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
unable to load certificate
140567473276560:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: TRUSTED CERTIFICATE
enc.com.au:443 Expired -2457182
So, apparently, there was something wrong with the certificate. The problem was that this was CloudFlare, who seem to have a good idea of how to handle certificates, and all my browsers were happy. ssl-cert-check is a shell script that uses openssl to make the connection, so the next stop was to see what openssl had to say.
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 CONNECTED(00000003)
140115756086928:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:769:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 345 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---
No peer certificate available. That was the clue I was looking for. Where's my Certificate? CloudFlare Universal SSL uses certificates that have multiple domains in the one certificate. They do this by having one canonical name, which is something like sni(numbers).cloudflaressl.com, and then multiple Subject Alternative Names (a bit like ServerAlias in apache configurations). This way a single server with a single certificate can serve multiple domains. The way that the client tells the server which website it is looking for is Server Name Indication (SNI). As part of the TLS handshaking the client tells the server "I want website www.enc.com.au". The thing is, by default, both openssl s_client and the check script do not use this feature. That was why the SSL certificate checks were failing: the server was waiting for the client to ask what website it wanted. Modern browsers do this automatically so it just works for them. The Fix For openssl on the command line, there is a flag -servername which does the trick nicely:
$ echo ""   /usr/bin/openssl s_client -connect enc.com.au:443 -servername enc.com.au 
CONNECTED(00000003)
depth=2 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO ECC Certification Authority
verify error:num=20:unable to get local issuer certificate
---
(lots of good SSL type messages)
That made openssl happy. We asked the server what website we were interested in with -servername and got the certificate. The fix for ssl-cert-check is even simpler. Like a lot of things, once you know the problem, the solution is not only easy to work out but someone has done it for you already. There is a Debian bug report on this problem with a simple fix from Francois Marier. Just edit the check script and change the line that has:
 TLSSERVERNAME="FALSE"
and change FALSE to TRUE. Then the script is happy too:
$ ssl-cert-check -s enc.com.au -p https
Host Status Expires Days
----------------------------------------------- ------------ ------------ ----
enc.com.au:https Valid Sep 30 2015 114
All working and as expected! This isn't really a CloudFlare problem as such, it is just that this is the first place I had seen these sorts of SNI certificates being used in something I administer (or more correctly, something behind the something).
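As a postscript, the same -servername trick can be combined with openssl x509 to pull the expiry dates straight out of the presented certificate (a quick sketch using the same domain):
$ echo "" | openssl s_client -connect enc.com.au:443 -servername enc.com.au 2>/dev/null | openssl x509 -noout -dates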
